Friday, March 28, 2003

Do you believe your error bars?



Computing uncertainty estimates from Monte Carlo data usually assumes the data are normally distributed (or can be reblocked enough to become approximately normal, per the central limit theorem). But what if we don't have quite that much data? What effect does the underlying distribution have on the uncertainty estimates (error bars)?
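
A minimal reblocking sketch in Python, just to make the idea concrete (the AR(1) toy data and the naive standard error are my own illustrative assumptions): average neighboring samples into blocks, recompute the error of the mean at each level, and trust the value where it plateaus, since by then the blocks are roughly independent and closer to normal.

```python
import numpy as np

def blocked_errors(data):
    """Naive standard error of the mean at successive reblocking levels.

    Each level averages pairs of neighboring samples, halving the count.
    For correlated data the apparent error grows with block size until
    the blocks are roughly independent, then plateaus.
    """
    data = np.asarray(data, dtype=float)
    errors = []
    while len(data) >= 2:
        errors.append(data.std(ddof=1) / np.sqrt(len(data)))
        m = (len(data) // 2) * 2          # drop a leftover odd sample
        data = 0.5 * (data[0:m:2] + data[1:m:2])
    return errors

# Toy correlated data: an AR(1) chain.
rng = np.random.default_rng(0)
x = np.zeros(2**14)
for i in range(1, len(x)):
    x[i] = 0.9 * x[i - 1] + rng.normal()

for level, err in enumerate(blocked_errors(x)):
    print(f"block size {2**level:5d}: error of mean = {err:.4f}")
```

The question above is really about what to do when the data run out before that plateau appears.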


Mervlyn Moodley looks at this question in the paper "The Lognormal Distribution and Quantum Monte Carlo Data".


The most interesting figures seem to be Figures 3 and 9. Figure 3 shows how the confidence intervals change as the data are reblocked (and hence come closer to normal). Figure 9 is generated from a real simulation, and there appears to be very little difference between the error bars computed assuming normality and those that take the underlying lognormal distribution into account.

Monday, March 17, 2003

Bilinear QMC


I'm reading a recent paper by Arias de Saavedra and Kalos titled "Bilinear diffusion quantum Monte Carlo methods" (PRE 67, 026708). They present an algorithm that uses a pair of walkers (the bilinear part) to sample the square of the ground state wavefunction (rather than the first power of the wavefunction, as in ordinary DMC).


They claim it's useful for computing unbiased expectation values (see my post from Monday, Feb. 24 - although I'm not sure it would help with derivative operators) and energy differences.


The part I find most intriguing is the way they use importance sampling - they get reasonable results for hydrogen without it. And when they do use importance sampling, it's only to remove singularities due to cusps. I would guess that any successful scheme (i.e., one able to scale reasonably with system size) needs to use as accurate a trial wavefunction as possible to guide the sampling.


The more I study the paper, the less I understand it. I guess the first step is to try to reproduce their results for the 1-D harmonic oscillator.
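
As a starting point for that comparison, here is a bare-bones plain DMC sketch for the 1-D harmonic oscillator in Python - emphatically not the bilinear algorithm from the paper, just the ordinary single-population method with no importance sampling, and the time step, population target, and feedback strength are arbitrary choices of mine. The exact ground-state energy is 0.5 in oscillator units.

```python
import numpy as np

rng = np.random.default_rng(1)

def dmc_harmonic(n_target=500, dt=0.01, n_steps=5000):
    """Plain diffusion Monte Carlo for V(x) = x^2 / 2 (exact E0 = 0.5)."""
    x = rng.normal(size=n_target)          # initial walker positions
    e_trial = 0.5                          # trial energy for population control
    estimates = []
    for step in range(n_steps):
        # Diffusion: Gaussian move with variance dt.
        x_new = x + rng.normal(scale=np.sqrt(dt), size=len(x))
        # Branching weight from the symmetrized potential energy.
        v_avg = 0.25 * (x**2 + x_new**2)   # (V(x) + V(x')) / 2
        w = np.exp(-dt * (v_avg - e_trial))
        # Stochastic rounding to an integer number of copies per walker.
        copies = (w + rng.random(len(w))).astype(int)
        x = np.repeat(x_new, copies)
        # Mixed estimator with a constant trial function is just <V>.
        e_est = np.mean(0.5 * x**2)
        estimates.append(e_est)
        # Crude feedback keeps the population near its target.
        e_trial = e_est - np.log(len(x) / n_target)
    return np.mean(estimates[n_steps // 2:])

print("plain DMC estimate of E0:", dmc_harmonic())   # should land near 0.5
```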

Tuesday, March 11, 2003

Steal this research idea


Automatic control of Monte Carlo simulations - I want to know how to do it, but I don't necessarily want to figure it out myself. :)

  • Determining equilibrium and phase transitions.
    Typically the particles are started in a lattice or random configuration and allowed to run a while to equilibrate. I want automatic detection of equilibrium, which is especially tricky near phase boundaries. Also, near phase transitions the system can suddenly change phase.
  • Adjustment of parameters for optimum efficiency
    The key quantity affecting the efficiency is the correlation time, but it's very noisy. Is there some other quantity we could use that behaves similarly but with less noise? Since this only affects the efficiency, approximations are quite okay. Like assuming some analytic form for the autocorrelation function with a small number of parameters and fitting to it (see the sketch after this list)?


    Also note that SGA-like algorithms can be used for control as well as optimization. For control, the decay parameter levels off at a constant so the algorithm keeps following the system, rather than continuing to decrease as it does in optimization.

  • Better control of DMC population

    Occasionally, I have trouble with the number of walkers increasing rapidly. In my codes, this means the population exceeds some maximum value and the program stops. Sometimes the number of walkers jumps to a large but stable value, and the energy becomes far too low and clearly wrong. The hard part is that population control introduces a bias, so it can't be too intrusive.


    For these items, are there any concepts from control theory or signal processing that would be useful?
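
Here is the kind of autocorrelation fit I have in mind for the second item above - a rough Python sketch where the single-exponential form, the fit window, and the use of scipy's curve_fit are all just my assumptions for illustration:

```python
import numpy as np
from scipy.optimize import curve_fit

def autocorrelation(x, max_lag):
    """Normalized autocorrelation C(t) for lags t = 0 .. max_lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    var = np.var(x)
    n = len(x)
    return np.array([np.mean(x[:n - t] * x[t:]) / var
                     for t in range(max_lag + 1)])

def fitted_correlation_time(x, max_lag=100):
    """Fit A * exp(-t / tau) to the measured autocorrelation function."""
    c = autocorrelation(x, max_lag)
    t = np.arange(max_lag + 1)
    (a, tau), _ = curve_fit(lambda t, a, tau: a * np.exp(-t / tau),
                            t, c, p0=(1.0, 5.0))
    return tau

# Toy data: an AR(1) chain whose exponential correlation time is -1/ln(0.95) ~ 19.5.
rng = np.random.default_rng(2)
x = np.zeros(50_000)
for i in range(1, len(x)):
    x[i] = 0.95 * x[i - 1] + rng.normal()

print("fitted correlation time:", fitted_correlation_time(x))
```

The idea is that the fitted tau should fluctuate less from run to run than a directly summed estimate (at the cost of bias if the exponential form is wrong), which might make it usable for automatic adjustment of step sizes or update schedules.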

Tuesday, March 04, 2003

Dynamic Monte Carlo


Checking the arxiv.org preprint server, I found an article titled "An Introduction To Monte Carlo Simulations Of Surface Reactions" by A.P.J. Jansen, which describes Dynamic Monte Carlo methods (Kinetic Monte Carlo is similar). One important input is the set of transition probabilities (rates) for the elementary processes. From page 21: "Estimates of the error made using DFT for such systems are at least about 10 kJ/mol."


Can QMC do better?
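
For my own reference, the skeleton of a kinetic Monte Carlo step looks roughly like the Python sketch below - this is the generic textbook rate-selection scheme, not anything specific from Jansen's paper, and the rates are made-up numbers (in a real calculation they would come from the DFT or QMC barriers via something like an Arrhenius expression).

```python
import numpy as np

rng = np.random.default_rng(3)

def kmc_step(rates, time):
    """One kinetic Monte Carlo step.

    rates : transition rates of the currently possible events; these are
            exactly the inputs whose accuracy depends on DFT (or QMC).
    Returns the index of the chosen event and the updated simulation time.
    """
    rates = np.asarray(rates, dtype=float)
    total = rates.sum()
    # Pick an event with probability rate_i / total.
    event = int(np.searchsorted(np.cumsum(rates), rng.random() * total))
    # Advance the clock by an exponentially distributed waiting time.
    time += rng.exponential(1.0 / total)
    return event, time

# Hypothetical example: three surface processes with rates in 1/s.
rates = [1.0e3, 2.5e2, 4.0e1]
t = 0.0
for _ in range(5):
    event, t = kmc_step(rates, t)
    print(f"event {event} at t = {t:.3e} s")
```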